AIbase
# Multi-round Iterative Fine-tuning

## Gemma 2 9B It SPPO Iter3

An 8.9-billion-parameter language model produced in the third iteration of Self-Play Preference Optimization (SPPO), starting from google/gemma-2-9b-it and fine-tuned on the UltraFeedback dataset.

Tags: Large Language Model, Transformers, English
Developer: UCLA-AGI
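The entry above describes a model refined over three self-play rounds. As an illustrative toy sketch only (not the actual training code), the core SPPO idea of repeatedly reweighting a policy toward answers that win against the current policy can be shown with a multiplicative-weights update over three hypothetical candidate answers; the preference matrix and learning rate here are invented for illustration, whereas real SPPO updates the weights of a full language model.

```python
import math

# Hypothetical pairwise preferences: P[i][j] = probability answer i beats answer j.
# These numbers are made up for illustration.
P = [
    [0.5, 0.7, 0.9],
    [0.3, 0.5, 0.6],
    [0.1, 0.4, 0.5],
]

def sppo_iterate(pi, P, eta=2.0, rounds=3):
    """Run `rounds` multiplicative-weights updates of the policy `pi`.

    Each round scores every answer by its win rate against the current
    policy (the self-play opponent), then reweights the policy by
    exp(eta * win_rate) and renormalizes.
    """
    n = len(pi)
    for _ in range(rounds):
        # Expected win rate of each answer against an opponent sampled from pi.
        win = [sum(P[i][j] * pi[j] for j in range(n)) for i in range(n)]
        w = [pi[i] * math.exp(eta * win[i]) for i in range(n)]
        z = sum(w)
        pi = [x / z for x in w]
    return pi

# Start uniform; after three rounds mass concentrates on the strongest answer.
pi = sppo_iterate([1 / 3, 1 / 3, 1 / 3], P)
print(pi)
```

After three rounds the distribution shifts decisively toward answer 0, mirroring how each SPPO iteration sharpens the model toward responses preferred over its own previous outputs.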
© 2025 AIbase